Boosting Graph Neural Networks via Adaptive Knowledge Distillation

Authors

Abstract

Graph neural networks (GNNs) have shown remarkable performance on diverse graph mining tasks. While sharing the same message passing framework, our study shows that different GNNs learn distinct knowledge from the same graph. This implies potential improvement by distilling the complementary knowledge from multiple models. However, knowledge distillation (KD) transfers knowledge from high-capacity teachers to a lightweight student, which deviates from our scenario: GNNs are often shallow. To transfer knowledge effectively, we need to tackle two challenges: how to transfer knowledge from compact teachers to a student with the same capacity; and, how to exploit the student GNN's own learning ability. In this paper, we propose a novel adaptive KD framework, called BGNN, which sequentially transfers knowledge from multiple GNNs into a student GNN. We also introduce an adaptive temperature module and a weight boosting module. These modules guide the student to learn the appropriate knowledge for effective learning. Extensive experiments have demonstrated the effectiveness of BGNN. In particular, we achieve up to 3.05% improvement on node classification and 6.35% improvement on graph classification over vanilla GNNs.
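The abstract does not spell out the exact formulation of BGNN's adaptive temperature or weight boosting modules. As a point of reference only, below is a minimal sketch of standard logit-based knowledge distillation with a fixed temperature, the quantity an adaptive temperature module would modulate; the function and parameter names (kd_loss, temperature, alpha) are illustrative and not taken from the paper.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Standard logit-based knowledge distillation loss.

    Combines cross-entropy on ground-truth labels with a KL term that pulls
    the student's temperature-softened predictions toward the teacher's.
    Per the abstract, BGNN adapts the temperature and re-weights examples in
    a boosting fashion; this sketch keeps both fixed for illustration.
    """
    ce = F.cross_entropy(student_logits, labels)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    return alpha * ce + (1 - alpha) * kl
```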


Related papers

Adaptive Graph Convolutional Neural Networks

Graph Convolutional Neural Networks (Graph CNNs) are generalizations of classical CNNs to handle graph data such as molecular data, point clouds, and social networks. Current filters in graph CNNs are built for a fixed and shared graph structure. However, for most real data, the graph structures vary in both size and connectivity. The paper proposes a generalized and flexible graph CNN taking dat...
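For context on the fixed, shared filters this abstract contrasts against, here is a minimal sketch of the standard graph-convolution propagation rule (Kipf & Welling-style) on a dense adjacency matrix; it is an illustrative baseline, not the adaptive filter proposed in that paper.

```python
import torch

def gcn_layer(adj, features, weight):
    """One fixed-filter graph convolution: symmetric normalization + linear map.

    adj:      (N, N) dense adjacency matrix without self-loops
    features: (N, F_in) node feature matrix
    weight:   (F_in, F_out) learnable weight matrix
    """
    n = adj.size(0)
    a_hat = adj + torch.eye(n)                   # add self-loops
    deg = a_hat.sum(dim=1)
    d_inv_sqrt = torch.diag(deg.pow(-0.5))
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt   # D^-1/2 (A + I) D^-1/2
    return torch.relu(norm_adj @ features @ weight)
```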


Data-Free Knowledge Distillation for Deep Neural Networks

Recent advances in model compression have provided procedures for compressing large neural networks to a fraction of their original size while retaining most if not all of their accuracy. However, all of these approaches rely on access to the original training set, which might not always be possible if the network to be compressed was trained on a very large dataset, or on a dataset whose relea...


Growing adaptive neural networks with graph grammars

This paper describes how graph grammars may be used to grow neural networks. The grammar facilitates a very compact and declarative description of every aspect of a neural architecture; this is important from a software/neural engineering point of view, since the descriptions are much easier to write and maintain than programs written in a high-level language, such as C++, and do not require p...


Adaptive Boosting of Neural Networks for Character Recognition

"Boosting" is a general method for improving the performance of any learning algorithm that consistently generates classifiers which need to perform only slightly better than random guessing. A recently proposed and very promising boosting algorithm is AdaBoost [5]. It has been applied with great success to several benchmark machine learning problems using rather simple learning algorithms [4],...
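Since this abstract references AdaBoost [5] without stating its update rule, here is a minimal sketch of one round of discrete AdaBoost re-weighting; variable names are illustrative, and this is the generic algorithm rather than the neural-network training variants studied in the paper.

```python
import numpy as np

def adaboost_round(sample_weights, predictions, labels):
    """One round of discrete AdaBoost re-weighting.

    sample_weights: current weights over training examples (sum to 1)
    predictions, labels: arrays with values in {-1, +1}
    Returns the classifier weight alpha and the updated example weights,
    which emphasize the examples the current classifier got wrong.
    """
    miss = (predictions != labels).astype(float)
    err = np.dot(sample_weights, miss)
    alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
    new_weights = sample_weights * np.exp(-alpha * labels * predictions)
    new_weights /= new_weights.sum()
    return alpha, new_weights
```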


Training Methods for Adaptive Boosting of Neural Networks

"Boosting" is a general method for improving the performance of any learning algorithm that consistently generates classifiers which need to perform only slightly better than random guessing. A recently proposed and very promising boosting algorithm is AdaBoost [5]. It has been applied with great success to several benchmark machine learning problems using rather simple learning algorithms [4],...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i6.25944